
Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning


Abstract

Sketch portrait generation benefits a wide range of applications such as digital entertainment and law enforcement. Although plenty of efforts have been dedicated to this task, several issues still remain unsolved for generating vivid and detail-preserving personal sketch portraits. For example, quite a few artifacts may exist in synthesizing hairpins and glasses, and textural details may be lost in the regions of hair or mustache. Moreover, the generalization ability of current systems is somewhat limited since they usually require elaborately collecting a dictionary of examples or carefully tuning features/components. In this paper, we present a novel representation learning framework that generates an end-to-end photo-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into different components according to their representational contents (i.e., structural and textural parts) by using a pre-trained Convolutional Neural Network (CNN). Then, we utilize a Branched Fully Convolutional Neural Network (BFCN) for learning structural and textural representations, respectively. In addition, we design a Sorted Matching Mean Square Error (SM-MSE) metric to measure texture patterns in the loss function. In the stage of sketch rendering, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks suggest that our approach outperforms example-based synthesis algorithms in terms of both perceptual and objective metrics. In addition, the proposed method also has better generalization ability across datasets without additional training.
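The abstract names two concrete computational pieces: the SM-MSE texture loss and the probabilistic fusion of the two branch outputs. The sketch below is a minimal NumPy illustration of both ideas, assuming that SM-MSE compares sorted pixel intensities of corresponding patches (so the loss captures texture statistics rather than exact pixel alignment) and that fusion is a pixelwise convex combination guided by a soft probability map; the function names and this formulation are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def sm_mse(pred_patch: np.ndarray, target_patch: np.ndarray) -> float:
    """Sorted Matching MSE (illustrative): sort the pixel intensities of
    both patches, then compare them position by position. Sorting discards
    spatial layout, so the loss measures the distribution of stroke
    intensities -- a texture statistic -- rather than pixel alignment."""
    p = np.sort(pred_patch.ravel())
    t = np.sort(target_patch.ravel())
    return float(np.mean((p - t) ** 2))

def probabilistic_fusion(structural: np.ndarray,
                         textural: np.ndarray,
                         p_texture: np.ndarray) -> np.ndarray:
    """Pixelwise fusion of the two branch outputs (illustrative).
    p_texture is an assumed soft mask in [0, 1], e.g. near 1 in
    hair/mustache regions and near 0 on facial structure."""
    return p_texture * textural + (1.0 - p_texture) * structural

# Toy usage: two patches with the same intensity histogram but different
# layouts have zero SM-MSE, while plain MSE between them stays large.
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
shuffled = rng.permutation(patch.ravel()).reshape(8, 8)
print(sm_mse(patch, shuffled))                   # ~0.0
print(float(np.mean((patch - shuffled) ** 2)))   # > 0
```

The toy check at the end shows why such a loss suits texture regions: rearranging strokes within a patch leaves the sorted comparison unchanged, which is exactly the invariance a per-pixel MSE lacks.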
